Knowledge graph completion (a.k.a. link prediction), i.e., the task of inferring missing information from a knowledge graph, is widely used in many applications, such as product recommendation and question answering. Recent approaches based on knowledge graph embeddings and/or rule mining and reasoning are data-driven and thus rely solely on the information contained in the input knowledge graph. This leads to unsatisfactory prediction results, which makes such solutions inadequate for critical domains such as healthcare. To further improve the accuracy of knowledge graph completion, we propose to combine the data-driven power of knowledge graph embeddings with domain-specific reasoning axiomatized by experts or ontologies (e.g., in OWL2). In this way, we not only improve prediction accuracy with domain knowledge that may not be contained in the input knowledge graph, but also allow users to plug in their own knowledge graph embedding and reasoning methods. Our initial results show that we improve the MRR accuracy of vanilla knowledge graph embeddings by up to 3x and outperform hybrid solutions that combine knowledge graph embeddings with rule mining and reasoning by up to 3.5x in MRR.
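A minimal sketch of the hybrid idea, with all data and names (embed_score, consistent, the toy axioms) purely illustrative: rank candidate triples by an embedding score while a tiny stand-in for an OWL2 reasoner filters out axiom-violating candidates.

```python
import numpy as np

# Toy entities, embeddings, and OWL2-style axioms (all illustrative).
rng = np.random.default_rng(0)
entities = ["aspirin", "headache", "paris"]
emb_e = {e: rng.normal(size=16) for e in entities}
emb_r = {"treats": rng.normal(size=16)}

# Domain/range axioms: 'treats' maps Drug -> Condition.
types = {"aspirin": "Drug", "headache": "Condition", "paris": "City"}
axioms = {"treats": ("Drug", "Condition")}

def embed_score(h, r, t):
    """TransE-style plausibility score: higher means more plausible."""
    return -np.linalg.norm(emb_e[h] + emb_r[r] - emb_e[t])

def consistent(h, r, t):
    """Tiny stand-in for an OWL2 reasoner: check domain/range axioms."""
    dom, rng_type = axioms[r]
    return types[h] == dom and types[t] == rng_type

# Rank tail candidates for (aspirin, treats, ?); 'paris' is filtered out
# by the axioms before the embedding model ever scores it.
candidates = [t for t in entities
              if t != "aspirin" and consistent("aspirin", "treats", t)]
print(sorted(candidates, key=lambda t: embed_score("aspirin", "treats", t),
             reverse=True))
```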
With the proliferation of transformer models, many have investigated how attention operates over learned representations. However, attention is still overlooked for specific tasks such as semantic parsing. A popular approach to represent sentence meaning is Abstract Meaning Representation (AMR). Until now, the alignment between a sentence and its AMR representation has been explored in different ways, such as through rules or via the Expectation Maximization (EM) algorithm. In this paper, we investigate the ability of transformer-based parsing models to produce effective alignments without ad-hoc strategies. We present the first in-depth exploration of cross-attention for AMR by means of the alignment between sentence spans and semantic units in the graph. We show how current transformer-based parsers implicitly encode alignment information in their cross-attention weights and how this can be exploited to extract such alignments. Furthermore, we supervise and guide cross-attention using alignments, thereby removing the need for English- and AMR-specific rules.
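A sketch of how such alignments could be read off the weights, assuming access to a parser's cross-attention tensor with a [layer, head, graph token, sentence token] layout (the layout and the random tensor below are illustrative, not any specific parser's API):

```python
import torch

# Illustrative stand-in for a decoder's cross-attention weights.
layers, heads, n_graph, n_sent = 6, 8, 5, 12
cross_attn = torch.rand(layers, heads, n_graph, n_sent).softmax(dim=-1)

def extract_alignments(cross_attn):
    """Average the attention over layers and heads, then align each
    semantic unit (graph token) with the sentence position it attends
    to most strongly."""
    avg = cross_attn.mean(dim=(0, 1))   # [graph_token, sentence_token]
    return avg.argmax(dim=-1)           # one sentence index per unit

print(extract_alignments(cross_attn))   # e.g. tensor([3, 7, 0, 11, 4])
```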
We formulate a metric for evaluating the performance of a generative network given two sets of images. The currently popular performance metric for this task is the Fréchet Inception Distance (FID). FID assumes that images featurized using the penultimate layer of Inception-v3 follow a Gaussian distribution, an assumption that should not be violated if we wish to use FID as a metric. However, we show that the Inception-v3 features of the ImageNet dataset are not Gaussian; in particular, each marginal is not Gaussian. To remedy this problem, we model the featurized images using Gaussian mixture models (GMMs) and compute the 2-Wasserstein distance restricted to GMMs. We define a performance metric on two sets of images, which we call WaM, by using Inception-v3 (or another classifier) to featurize the images, estimating two GMMs, and comparing the GMMs using the restricted 2-Wasserstein distance. We experimentally show the advantages of WaM over FID, including that FID is more sensitive than WaM to imperceptible image perturbations. By modeling the non-Gaussian features obtained from Inception-v3 as GMMs and using a GMM metric, we can more accurately evaluate generative network performance.
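A sketch of the construction described above, using scikit-learn for GMM fitting and the POT library for the discrete optimal transport between mixture components; the random features below are a toy stand-in for Inception-v3 features, and the component count k is an arbitrary choice:

```python
import numpy as np
from scipy.linalg import sqrtm
from sklearn.mixture import GaussianMixture
import ot  # POT: Python Optimal Transport

def w2_gaussian_sq(m1, S1, m2, S2):
    """Squared 2-Wasserstein distance between two Gaussians."""
    root = sqrtm(S1)
    cross = sqrtm(root @ S2 @ root)
    return np.sum((m1 - m2) ** 2) + np.trace(S1 + S2 - 2 * np.real(cross))

def wam(feats_a, feats_b, k=3):
    """Fit a GMM to each feature set, then compute the 2-Wasserstein
    distance restricted to GMMs: discrete optimal transport between
    mixture weights with Gaussian W2^2 as the ground cost."""
    ga = GaussianMixture(k, covariance_type="full", random_state=0).fit(feats_a)
    gb = GaussianMixture(k, covariance_type="full", random_state=0).fit(feats_b)
    cost = np.array([[w2_gaussian_sq(ga.means_[i], ga.covariances_[i],
                                     gb.means_[j], gb.covariances_[j])
                      for j in range(k)] for i in range(k)])
    return np.sqrt(ot.emd2(ga.weights_, gb.weights_, cost))

rng = np.random.default_rng(0)
a = rng.normal(size=(500, 4))            # toy stand-in for featurized set A
b = rng.normal(loc=1.0, size=(500, 4))   # toy stand-in for featurized set B
print(wam(a, b))
```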
Existing automated techniques for software documentation typically attempt to reason between two main sources of information: code and natural language. However, this reasoning process is often complicated by the lexical gap between more abstract natural language and more structured programming languages. One potential bridge for this gap is the Graphical User Interface (GUI), as GUIs inherently encode salient information about underlying program functionality into rich, pixel-based data representations. This paper offers one of the first comprehensive empirical investigations into the connection between GUIs and functional, natural language descriptions of software. First, we collect, analyze, and open source a large dataset of functional GUI descriptions consisting of 45,998 descriptions for 10,204 screenshots from popular Android applications. The descriptions were obtained from human labelers and underwent several quality control mechanisms. To gain insight into the representational potential of GUIs, we investigate the ability of four Neural Image Captioning models to predict natural language descriptions of varying granularity when provided a screenshot as input. We evaluate these models quantitatively, using common machine translation metrics, and qualitatively through a large-scale user study. Finally, we offer learned lessons and a discussion of the potential shown by multimodal models to enhance future techniques for automated software documentation.
We are witnessing a widespread adoption of artificial intelligence in healthcare. However, most of the advancements in deep learning (DL) in this area consider only unimodal data, neglecting other modalities. Their multimodal interpretation is necessary for supporting diagnosis, prognosis, and treatment decisions. In this work we present a deep architecture, explainable by design, which jointly learns modality reconstructions and sample classifications using tabular and imaging data. The explanation of the decision taken is computed by applying a latent shift that simulates a counterfactual prediction, revealing the features of each modality that contribute the most to the decision, together with a quantitative score indicating the modality importance. We validate our approach in the context of the COVID-19 pandemic using the AIforCOVID dataset, which contains multimodal data for the early identification of patients at risk of severe outcome. The results show that the proposed method provides meaningful explanations without degrading the classification performance.
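A minimal sketch of the latent-shift idea with stand-in linear modules (all shapes, module names, and the shift rule are illustrative, not the paper's exact architecture): move the latent code against the classifier's gradient to simulate a counterfactual, then compare the two reconstructions to see which input features drive the decision.

```python
import torch
import torch.nn as nn

# Stand-in components: encoder/decoder/classifier over a latent space.
encoder = nn.Linear(32, 8)
decoder = nn.Linear(8, 32)
classifier = nn.Linear(8, 1)

def latent_shift_explanation(x, lam=1.0):
    """Shift the latent code against the classifier gradient to simulate
    a counterfactual, then report the per-feature reconstruction change
    as an attribution map."""
    z = encoder(x)
    z.retain_grad()
    classifier(z).sum().backward()
    z_cf = z - lam * z.grad               # counterfactual latent code
    delta = decoder(z_cf) - decoder(z)    # feature-level change
    return delta.abs()

attr = latent_shift_explanation(torch.randn(1, 32))
print(attr.flatten().topk(5).indices)     # most influential input features
```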
User equipment is one of the main bottlenecks facing the gaming industry nowadays. The extremely realistic games currently available impose high computational requirements on the user devices that run them. As a consequence, the game industry has proposed the concept of Cloud Gaming, a paradigm that improves the gaming experience on devices with limited hardware. To this end, games are hosted on remote servers, relegating users' devices to the role of a peripheral for interacting with the game. However, this paradigm overloads the communication links connecting the users with the cloud, so service experience becomes highly dependent on network connectivity. To overcome this, Cloud Gaming will be boosted by the promised performance of 5G and future 6G networks, together with the flexibility provided by mobility in multi-RAT scenarios, such as WiFi. In this scope, the present work proposes a framework for measuring and estimating the main end-to-end (E2E) metrics of the Cloud Gaming service, namely key quality indicators (KQIs). In addition, different machine learning techniques are assessed for predicting KQIs related to the Cloud Gaming user's experience. To this end, the main KQIs of the service, such as input lag, freeze percentage, or perceived video frame rate, are collected in a real environment. Based on these, results show that machine learning techniques provide a good estimation of these indicators solely from network-based metrics. This is considered a valuable asset to guide the delivery of Cloud Gaming services through cellular communication networks even without access to the user's device, as is expected for telecom operators.
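A toy sketch of the KQI-estimation step, with synthetic network metrics in place of the real measurements (the feature names, value ranges, and the linear ground truth are invented for illustration):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Synthetic stand-in: network metrics (latency ms, jitter ms,
# throughput Mbps, loss %) and a measured KQI (input lag, ms).
rng = np.random.default_rng(0)
X = rng.uniform([5, 0, 1, 0], [80, 30, 100, 5], size=(2000, 4))
input_lag = 20 + 1.5 * X[:, 0] + 2.0 * X[:, 1] + rng.normal(0, 5, 2000)

# Predict the KQI solely from network-based metrics.
X_tr, X_te, y_tr, y_te = train_test_split(X, input_lag, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"MAE: {mean_absolute_error(y_te, model.predict(X_te)):.1f} ms")
```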
Previous work has shown the potential of deep learning to predict renal obstruction using kidney ultrasound images. However, these image-based classifiers have been trained with the goal of single-visit inference in mind. We compare methods from video action recognition (i.e., convolutional pooling, LSTM, TSM) to adapt single-visit convolutional models to handle multiple-visit inference. We demonstrate that incorporating images from a patient's past hospital visits provides only a small benefit for the prediction of obstructive hydronephrosis. Hence, inclusion of prior ultrasounds is beneficial, but prediction based on the latest ultrasound alone is sufficient for patient risk stratification.
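A sketch of the simplest of the compared adaptations, convolutional (mean) pooling over per-visit embeddings, with a toy backbone standing in for the pretrained ultrasound CNN (an LSTM or TSM would replace the pooling step):

```python
import torch
import torch.nn as nn

class MultiVisitPooling(nn.Module):
    """Adapt a single-visit CNN to multiple visits by mean-pooling the
    per-visit embeddings before the classification head."""
    def __init__(self, cnn, feat_dim, n_classes=2):
        super().__init__()
        self.cnn, self.head = cnn, nn.Linear(feat_dim, n_classes)

    def forward(self, visits):                  # [batch, n_visits, C, H, W]
        b, v = visits.shape[:2]
        feats = self.cnn(visits.flatten(0, 1))  # [batch * n_visits, feat_dim]
        pooled = feats.view(b, v, -1).mean(1)   # average across visits
        return self.head(pooled)

# Toy single-visit backbone standing in for the pretrained model.
backbone = nn.Sequential(nn.Conv2d(1, 4, 3),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten())
model = MultiVisitPooling(backbone, feat_dim=4)
print(model(torch.randn(2, 3, 1, 64, 64)).shape)  # torch.Size([2, 2])
```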
Visual representations can be defined as the activations of neuronal populations in response to images. The activation of a neuron as a function over all image space has been described as a "tuning landscape". As a function over a high-dimensional space, what is the structure of this landscape? In this study, we characterize tuning landscapes through the lens of level sets and Morse theory. A recent study measured the in vivo two-dimensional tuning maps of neurons in different brain regions. Here, we developed a statistically reliable signature for these maps based on the change of topology in level sets. We found this topological signature changed progressively throughout the cortical hierarchy, with similar trends found for units in convolutional neural networks (CNNs). Further, we analyzed the geometry of level sets on the tuning landscapes of CNN units. We advanced the hypothesis that higher-order units can be locally regarded as isotropic radial basis functions, but not globally. This shows the power of level sets as a conceptual tool to understand neuronal activations over image space.
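As a concrete illustration, one simple level-set signature is the number of connected components of super-level sets as the threshold sweeps the activation range; the sketch below uses a toy two-peak map and is not the paper's exact statistic:

```python
import numpy as np
from scipy import ndimage

def level_set_components(tuning_map, levels=10):
    """Count connected components of the super-level sets {f >= t} as
    the threshold t sweeps the activation range -- a simple topological
    signature of a tuning landscape."""
    ts = np.linspace(tuning_map.min(), tuning_map.max(), levels,
                     endpoint=False)
    return [ndimage.label(tuning_map >= t)[1] for t in ts]

# Toy 2D tuning map: two activation peaks over image space.
xx, yy = np.meshgrid(np.linspace(-3, 3, 100), np.linspace(-3, 3, 100))
f = (np.exp(-((xx - 1) ** 2 + yy ** 2))
     + np.exp(-((xx + 1) ** 2 + (yy - 1) ** 2)))
print(level_set_components(f))  # components split from 1 to 2 as t rises
```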
Iterative regularization is a classic idea in regularization theory that has recently become popular in machine learning. On the one hand, it allows one to design efficient algorithms that control numerical and statistical accuracy at the same time. On the other hand, it sheds light on the learning curves observed while training neural networks. In this paper, we focus on iterative regularization in the context of classification. After contrasting this setting with that of regression and inverse problems, we develop an iterative regularization approach based on the hinge loss function. More precisely, we consider a diagonal approach for a family of algorithms for which we prove convergence as well as rates of convergence. Our approach compares favorably with other alternatives, as also confirmed by numerical simulations.
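A minimal sketch of the underlying idea with the hinge loss: plain subgradient descent is run without any explicit penalty, and the iteration count acts as the regularization parameter. This illustrates early stopping only, not the paper's diagonal scheme, and the data is synthetic.

```python
import numpy as np

def hinge_subgradient_descent(X, y, steps, lr=0.1):
    """Subgradient descent on the average hinge loss of a linear
    classifier; stopping after `steps` iterations regularizes implicitly."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        margins = y * (X @ w)
        active = margins < 1   # points inside the hinge
        if active.any():
            w += lr * (y[active, None] * X[active]).mean(axis=0)
    return w

# Toy linearly separable data; more steps = weaker regularization.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] + 0.3 * X[:, 1] > 0, 1.0, -1.0)
for steps in (5, 50, 500):
    w = hinge_subgradient_descent(X, y, steps)
    print(steps, (np.sign(X @ w) == y).mean())  # training accuracy
```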
With the ever-growing model size and the limited availability of labeled training data, transfer learning has become an increasingly popular approach in many science and engineering domains. For classification problems, this work delves into the mystery of transfer learning through an intriguing phenomenon termed neural collapse (NC), where the last-layer features and classifiers of learned deep networks satisfy: (i) the within-class variability of the features collapses to zero, and (ii) the between-class feature means are maximally and equally separated. Through the lens of NC, our findings for transfer learning are the following: (i) when pre-training models, preventing intra-class variability collapse (to a certain extent) better preserves the intrinsic structures of the input data and thus leads to better model transferability; (ii) when fine-tuning models on downstream tasks, obtaining features with more NC on downstream data results in better test accuracy on the given task. These results not only demystify many widely used heuristics in model pre-training (e.g., data augmentation, projection head, self-supervised learning), but also lead to a more efficient and principled fine-tuning method on downstream tasks, which we demonstrate through extensive experimental results.
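As one concrete handle on property (i), within-class variability collapse is commonly measured in the NC literature as tr(Σ_W Σ_B^†)/C, where Σ_W and Σ_B are the within- and between-class covariances and C is the number of classes. A small sketch on synthetic last-layer features (the metric form is standard; the data is a toy stand-in):

```python
import numpy as np

def nc1(features, labels):
    """Within-class variability collapse: tr(Sigma_W @ pinv(Sigma_B)) / C.
    Approaches 0 as each class's features collapse to their class mean."""
    classes = np.unique(labels)
    C, d = len(classes), features.shape[1]
    mu_g = features.mean(axis=0)
    Sw, Sb = np.zeros((d, d)), np.zeros((d, d))
    for c in classes:
        fc = features[labels == c]
        mu_c = fc.mean(axis=0)
        Sw += (fc - mu_c).T @ (fc - mu_c) / len(features)
        Sb += np.outer(mu_c - mu_g, mu_c - mu_g) / C
    return np.trace(Sw @ np.linalg.pinv(Sb)) / C

# Toy features: tight clusters around distant means -> NC1 near zero.
rng = np.random.default_rng(0)
means = rng.normal(size=(3, 8)) * 5
feats = np.concatenate([m + 0.01 * rng.normal(size=(100, 8)) for m in means])
labels = np.repeat(np.arange(3), 100)
print(nc1(feats, labels))  # small value indicates strong collapse
```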